
    Volumetric reach-through displays for direct manipulation of 3D content

    In my PhD, I aim to develop a reach-through volumetric display in which points of light are emitted from each 3D position of the display volume, while still allowing people to introduce their hands inside to interact directly with the rendered content. Here, I present TomoLit, an inverse tomographic display in which multiple emitters project rays of different intensities at each angle, rendering a target image in mid-air. We have analysed how image quality is affected by the number of emitters, their locations, the angular resolution and the number of intensity levels. We have developed a simple emitter and are in the process of assembling multiple of them. I also outline what I plan to do next, e.g. moving from 2D to 3D and exploring interaction techniques. The feedback obtained in this symposium will dispel some of my doubts and guide my research career. This work has been funded by the Government of Navarre (FEDER) 0011-1365-2019-000086 and by Jóvenes Investigadores UPNA PJUPNA1923.
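The ray-intensity optimisation at the heart of an inverse tomographic display can be framed as a linear least-squares problem: a pixel's brightness is the sum of the intensities of the rays crossing it, and the emitters' per-ray intensities are solved to match a target image. The toy model below is not TomoLit's actual solver; the emitter layout (two banks casting axis-aligned rays through a pixel grid), the ray model and the solver choice are all illustrative assumptions.

```python
import numpy as np

def solve_ray_intensities(target):
    """Toy inverse-tomographic solver: two emitter banks (left edge and
    top edge) each cast one straight ray per row/column of a square pixel
    grid; a pixel's brightness is the sum of the ray intensities that
    cross it. Solve for non-negative intensities reproducing `target`."""
    n = target.shape[0]
    n_rays = 2 * n                       # n horizontal + n vertical rays
    A = np.zeros((n * n, n_rays))
    for i in range(n):
        for j in range(n):
            p = i * n + j
            A[p, i] = 1.0                # horizontal ray through row i
            A[p, n + j] = 1.0            # vertical ray through column j
    x, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    x = np.clip(x, 0.0, None)            # emitters cannot emit negative light
    rendered = (A @ x).reshape(n, n)
    return x, rendered

# A separable target (row profile + column profile) is exactly
# representable by this ray model, so the fit is perfect here.
rows = np.linspace(0.0, 1.0, 8)
cols = np.linspace(0.5, 1.5, 8)
target = rows[:, None] + cols[None, :]
x, rendered = solve_ray_intensities(target)
```

Richer emitter layouts and angular resolutions enlarge the matrix `A` in the same way, which is where the paper's trade-offs (number of emitters, locations, angular resolution, intensity levels) enter.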

    OpenMPD: A Low-Level Presentation Engine for Multimodal Particle-Based Displays

    Phased arrays of transducers have been evolving quickly in terms of software and hardware, with applications in haptics (acoustic vibrations), display (levitation), and audio. Most recently, Multimodal Particle-based Displays (MPDs) have even demonstrated volumetric content that can be seen, heard, and felt simultaneously, without additional instrumentation. However, current software tools only support individual modalities and do not address the integration and exploitation of the multi-modal potential of MPDs. This is because there is no standardized presentation pipeline tackling the challenges related to presenting this kind of multi-modal content (e.g., multi-modal support, multi-rate synchronization at 10 kHz, visual rendering, and synchronization and continuity). This article presents OpenMPD, a low-level presentation engine that deals with these challenges and allows structured exploitation of any type of MPD content (i.e., visual, tactile, audio). We characterize OpenMPD’s performance and illustrate how it can be integrated into higher-level development tools (i.e., the Unity game engine). We then illustrate its ability to enable novel presentation capabilities, such as support for multiple MPD contents, dexterous manipulations of fast-moving particles, or novel swept-volume MPD content.
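The multi-rate synchronization challenge can be illustrated by rate conversion: content authored at a low keyframe rate (e.g., a game engine's frame rate) must be resampled to the device's 10 kHz update rate so that the levitated particle moves smoothly. The sketch below is a hypothetical Python illustration of that idea only, not OpenMPD's actual C++/Unity pipeline; the function name and rates are assumptions.

```python
import numpy as np

def upsample_path(keyframes, key_rate=60, device_rate=10_000):
    """Linearly interpolate particle keyframes authored at `key_rate` Hz
    into per-update positions at the device rate (`device_rate` Hz).
    `keyframes` is an (n_keys, 3) array of xyz positions in metres."""
    keyframes = np.asarray(keyframes, dtype=float)
    t_keys = np.arange(len(keyframes)) / key_rate
    n_out = int(round(t_keys[-1] * device_rate)) + 1
    t_out = np.arange(n_out) / device_rate
    # Interpolate each spatial dimension independently.
    return np.stack([np.interp(t_out, t_keys, keyframes[:, d])
                     for d in range(keyframes.shape[1])], axis=1)

# Three 60 Hz keyframes lifting a particle 2 cm become ~333 device updates.
path = upsample_path([[0.0, 0.0, 0.00],
                      [0.0, 0.0, 0.01],
                      [0.0, 0.0, 0.02]])
```

A real engine would also bound per-update displacement (trap stiffness limits how fast a particle can be dragged), which linear interpolation alone does not enforce.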

    High-speed acoustic holography with arbitrary scattering objects

    Recent advances in high-speed acoustic holography have enabled levitation-based volumetric displays with tactile and audio sensations. However, current approaches do not compute the sound scattering of objects’ surfaces; thus, any physical object inside the volume can distort the sound field. Here, we present a fast computational technique that allows high-speed multipoint levitation even with arbitrary sound-scattering surfaces and demonstrate a volumetric display that works in the presence of any physical object. Our technique has a two-step scattering model and a simplified levitation solver, which together can achieve more than 10,000 updates per second to create volumetric images above and below static sound-scattering objects. The model estimates transducer contributions in real time by reformulating the boundary element method for acoustic holography, and the solver creates multiple levitation traps. We explain how our technique achieves its speed with minimal loss in trap quality and illustrate how it brings digital and physical content together by demonstrating mixed-reality interactive applications.
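For context, the scattering-free baseline that such solvers build on is conjugate-phase focusing: each transducer is driven with a phase that cancels its propagation delay to the target point, so all contributions arrive in phase. The sketch below illustrates only that baseline; it is not the paper's boundary-element scattering model or multipoint levitation solver, and the array geometry and far-field monopole field model are assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0     # m/s, air at ~20 °C
FREQ = 40_000.0            # typical 40 kHz levitation transducers
K = 2 * np.pi * FREQ / SPEED_OF_SOUND   # wavenumber

def focus_phases(transducers, point):
    """Conjugate-phase focusing: drive each transducer with phase -k*d
    so all contributions arrive at `point` in phase."""
    d = np.linalg.norm(transducers - point, axis=1)
    return (-K * d) % (2 * np.pi)

def field_at(transducers, phases, point):
    """Complex pressure under a far-field monopole model (~1/d decay,
    propagation phase k*d), up to a constant amplitude factor."""
    d = np.linalg.norm(transducers - point, axis=1)
    return np.sum(np.exp(1j * (phases + K * d)) / d)

# 16x16 array in the z=0 plane, 1 cm pitch, focus 10 cm above the centre.
xs, ys = np.meshgrid(np.arange(16) * 0.01, np.arange(16) * 0.01)
array_xyz = np.column_stack([xs.ravel() - 0.075,
                             ys.ravel() - 0.075,
                             np.zeros(256)])
focus = np.array([0.0, 0.0, 0.10])
phases = focus_phases(array_xyz, focus)
```

The paper's contribution sits on top of this picture: when a scattering object is present, each transducer's contribution at a point is no longer the free-field term above, which is what the reformulated boundary element method estimates at 10,000+ updates per second.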

    Machine Learning in Predicting Printable Biomaterial Formulations for Direct Ink Writing

    Three-dimensional (3D) printing is emerging as a transformative technology for biomedical engineering. The 3D printed product can be patient-specific by allowing customizability and direct control of the architecture. The trial-and-error approach currently used for developing the composition of printable inks is time- and resource-consuming due to the increasing number of variables requiring expert knowledge. Artificial intelligence has the potential to reshape the ink development process by forming a predictive model for printability from experimental data. In this paper, we constructed machine learning (ML) models, including a decision tree, a random forest (RF), and deep learning (DL), to predict the printability of biomaterials. A total of 210 formulations including 16 different bioactive and smart materials and 4 solvents were 3D printed, and their printability was assessed. All ML methods were able to learn and predict the printability of a variety of inks based on their biomaterial formulations. In particular, the RF algorithm achieved the highest accuracy (88.1%), precision (90.6%), and F1 score (87.0%), indicating the best overall performance of the three algorithms, while DL had the highest recall (87.3%). Furthermore, the ML algorithms predicted the printability window of biomaterials to guide ink development. The printability map generated with DL has finer granularity than those of the other algorithms. ML has proven to be an effective and novel strategy for developing biomaterial formulations with the desired 3D printability for biomedical engineering applications.
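The reported evaluation metrics all derive from the binary confusion matrix of printable vs. non-printable predictions. The helper below shows the standard definitions used for such comparisons; the example counts are purely illustrative, not the paper's data.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from a binary confusion matrix:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)           # of predicted printable, how many were
    recall = tp / (tp + fn)              # of truly printable, how many found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Illustrative counts (not from the paper): 100 test formulations.
acc, prec, rec, f1 = classification_metrics(tp=45, fp=5, fn=10, tn=40)
```

Note that a model can trade precision against recall, which is why RF leading on precision/F1 while DL leads on recall is a coherent outcome rather than a contradiction.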

    DATALEV: Acoustophoretic Data Physicalisation

    Here, we demonstrate DataLev, a data physicalisation platform with a physical assembly pipeline that allows us to computationally assemble 3D physical charts from acoustically levitated content. DataLev consists of several enhancement props that let us incorporate high-resolution projection, different 3D-printed artifacts, and multi-modal interaction. DataLev supports reconfigurable and dynamic physicalisations, which we animate and illustrate for different chart types. Our work opens up new opportunities for data storytelling using acoustic levitation.
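A minimal sketch of the kind of mapping such a platform needs is turning chart data into 3D target positions for levitated beads. The layout function below is a hypothetical illustration, not DataLev's assembly pipeline; the bead spacing and maximum height are assumed values.

```python
def barchart_positions(values, spacing=0.02, max_height=0.08):
    """Map data values to (x, y, z) bead targets in metres for a
    levitated bar chart: one bead per datum, evenly spaced along x,
    height above the array proportional to the value."""
    vmax = max(values)
    return [(i * spacing, 0.0, max_height * v / vmax)
            for i, v in enumerate(values)]

# Three data points -> three bead targets, tallest at 8 cm.
positions = barchart_positions([1, 2, 4])
```

Reconfigurable physicalisations then amount to retargeting these positions over time and letting the levitation solver move each bead to its new target.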